Search Results: "codehelp"

18 November 2012

Neil Williams: Long term maintenance of perl cross-build support in Debian

After prompts from Wookey and Steve McIntyre, I decided to look at #285559 and #633884 for perl cross-build support and then port that support forward to the current perl in Wheezy and on to the version of perl currently in experimental. The first patch is for perl 5.8, the second for perl 5.12, neither of which is currently available in Debian. snapshot.debian.org provided the 5.12 source, but that version no longer cross-builds with the patch.

The problem, as with any cross build, is that the build must avoid trying to execute binaries compiled within the build to achieve the test results required by ./configure (or, in the case of perl, Configure). dpkg-cross has one collection of cache values but the intention was always to migrate the package-specific support data into the packages themselves and keep the architecture-specific data in dpkg-cross or dpkg-dev. Therefore, the approach taken in #633884 would be correct, if only there were a way of ensuring that the cached values remain in sync with the relevant Debian package.

I'll note here that I am aware of other ways of cross-building perl; this is particularly concerned with cross-building the Debian configuration of perl as a Debian package, using Debian or Emdebian cross-compilers. After all, the objective is to support bootstrapping Debian onto new architectures. However, I fully expect this to be just as usable with Ubuntu packages of perl compiled with, e.g., Linaro cross-compilers, but I haven't yet looked at the differences between perl in Debian vs Ubuntu in any detail.

I've just got perl 5.14.2 cross-building for armel using the Emdebian gcc-4.4 cross-compiler (4.4.5-8) on a Debian sid amd64 machine without errors (it needs testing, which I'll look at later), so now is the time to document how it is done and what needs to be fixed. I've already discussed part of this with the current perl maintainers in Debian and, subject to just how the update mechanism works, have outline approval for pushing these changes into the Debian package and working with upstream where appropriate. The cache data itself might live in a separate source package which will use a strict dependency on perl to ensure that it remains in sync with the version which it can cross-build. Alternatively, if I can correctly partition the cache data between architecture-specific (and therefore generated from the existing files) and package_$version specific, then it may be possible to push a much smaller patch into the Debian perl package. This would start with some common data, calculate the arch-specific data and then look for some version-specific data, gleaned from Debian porter boxes whilst the version is in Debian experimental.

The key point is that I've offered to provide this support for the long term, ensuring that we don't end up with future stable releases of Debian having a perl package which cannot be cross-built. (To achieve that, we will also end up with versions of perl in Debian testing which also cross-build.)

This cross-build is still using dpkg-cross paths, not MultiArch paths, and this will need to be changed eventually. (e.g. by the source package providing two binaries, one which uses MultiArch and one which expects dpkg-cross paths.) The changes include patches for the upstream Makefile.SH, debian/rules and the cache data itself. Depending on where the cache data finally lives, the new support might or might not use the upstream Cross/ directory as the current contents date from the Zaurus support and don't appear to be that useful for current versions of perl.

The cache data itself has several problems:

  1. It is tightly tied to the version of perl which generated it.

  2. It is, as expected, architecture-dependent.

  3. It is, unfortunately, very sensitive to the specific configuration used by the distribution itself.


That last point is important because it means that the cache data is not useful upstream as a block. It also means that generating the cache data for a specific Debian package means running the generation code on the native architecture with all of the Debian build-dependencies installed for the full perl build. This is going to complicate the use of this method for new architectures like arm64.

My objective for the long term maintenance of this code is to create sufficient data that a new architecture can be bootstrapped by judicious use of some form of template. Quite how that works out, only time will tell. I expect that this will involve isolating the data which is truly architecture-specific (and doesn't change between perl versions) from the data related to the tests for build-dependencies (which does change between perl versions), and then working out how to deal with any remainder. A new architecture for a specific perl version should then just be a case of populating the arch-specific data, such as the size of a pointer/char and the format specifiers for long long etc., alongside the existing (and correct) data for the current version of perl.

Generating the cache data natively

The perl build repeats twice (three builds in total) and each build provides and requires slightly different cache data - static, debug and shared. Therefore, the maintenance code will need to provide a script which can run the correct configuration step for each mode, copy out the cache data for each one and clean up. The script will need to run inside a buildd chroot on a porter box (I'm looking at using abel.debian.org and harris.debian.org for this work so far) so that the derived data matches what the corresponding Debian native build would use. The data then needs slight modification - typically to replace the absolute paths with PERL_BUILD_DIR. It may also be necessary to change the value of cc, ranlib and other compiler-related values to the relevant cross-compiler executables. That should be possible to arrange within the build of the cache data support package itself, allowing new cache files to be dropped in directly from the porter box.
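A minimal sketch of that post-processing step, assuming a sed-based approach. The variable names (cc, ranlib, archlib) are real Configure cache values, but the sample file content, build path and cross-compiler prefix below are all illustrative, not taken from the final maintenance script:

```shell
# Stand-in for the real config.sh.static generated on the porter box:
cat > config.sh.static <<'EOF'
cc='gcc'
ranlib='ranlib'
archlib='/build/perl-5.14.2/lib'
EOF

# Replace the absolute build path with the PERL_BUILD_DIR placeholder
# and point compiler-related values at the cross toolchain:
BUILD_DIR=/build/perl-5.14.2
CROSS=arm-linux-gnueabi
sed -e "s|$BUILD_DIR|PERL_BUILD_DIR|g" \
    -e "s|^cc='gcc'|cc='${CROSS}-gcc'|" \
    -e "s|^ranlib='ranlib'|ranlib='${CROSS}-ranlib'|" \
    config.sh.static > config.sh.static.cross
cat config.sh.static.cross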

The configuration step may need to be optimised within debian/rules of perl itself as it currently proceeds on from the bare configuration to do some actual building but I need to compare the data to see if a bare config is modified later. The test step can be omitted already. Each step is performed as:

DEB_BUILD_OPTIONS="nocheck" fakeroot debian/rules perl.static


That is repeated for perl.debug and libperl.so.$(VERSION), where $(VERSION) comes from:

/bin/bash debian/config.debian --full-version
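The three passes could be driven from a single loop. This is a dry-run sketch which only echoes the commands shown above rather than executing them, and which hard-codes the version that config.debian would normally report:

```shell
# Dry run: print the three debian/rules invocations; a real porter-box
# script would execute each one and copy the cache files out after it.
VERSION=5.14.2   # really: $(/bin/bash debian/config.debian --full-version)
for target in perl.static perl.debug "libperl.so.$VERSION"; do
    echo "DEB_BUILD_OPTIONS=\"nocheck\" fakeroot debian/rules $target"
done > targets.txt
cat targets.txt
```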


The files to be copied out are:


There is a lot of scope for templating of some form here, e.g. config.h.debug is 4,686 lines long but only 41 of those lines differ between amd64 and armhf for the same version of perl (and some of those can be identified from existing architecture-specific constants) which should make for a much smaller patch.
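One form the templating could take is diff(1) and patch(1): keep one full file plus a small per-architecture diff, and reconstruct the other architecture's file from the pair. The two sample files below are tiny stand-ins for the real 4,686-line config.h.debug:

```shell
# Stand-ins for the per-arch cache files (content illustrative):
printf '#define Pointer_t 8\n#define Netdb_host_t char *\n' > config.h.debug.amd64
printf '#define Pointer_t 8\n#define Netdb_host_t const void *\n' > config.h.debug.armhf

# Keep only the per-arch delta (diff exits 1 when files differ):
diff -u config.h.debug.amd64 config.h.debug.armhf > armhf.diff || true

# Rebuild the armhf file from the amd64 file plus the delta:
patch -o config.h.debug.rebuilt config.h.debug.amd64 < armhf.diff
cmp config.h.debug.rebuilt config.h.debug.armhf && echo "armhf file reconstructed"
```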

Architecture-specific cache data for perl

So far, the existing patches only deal with armel and armhf. If I compare the differences between armel & armhf, it comes down to:

  1. compiler names (config.h.*)

  2. architecture names (config.sh.*)

  3. architecture-based paths (config.sh.*)


However, comparing armel and armhf doesn't provide sufficient info for deriving arm64 or mips etc. Comparing the same versions for armhf and amd64 shows the range of differences more clearly. Typical architecture differences exist, like the size of a long, flags to denote whether the compiler can cast negative floats to 32-bit ints, and the sprintf format specifier strings for handling floats and doubles. The data also includes some less expected ones like:

armhf: #define Netdb_host_t const void * /**/
amd64: #define Netdb_host_t char * /**/


I'm not at all sure why that is arch-specific - if anyone knows, email codehelp @ d.o - same address if anyone fancies helping out ....

Cross-builds and debclean

When playing with the cross-build, remember to use the cross-build clean support, not just debclean:


dpkg-architecture -aarmel -c fakeroot debian/rules clean


That wasted quite a bit of my time initially with having to blow away the entire tree, unpack it from original apt sources and repatch it. (Once Wheezy is out, I may actually investigate getting debclean to support the -a switch.)

OK, that's an introduction, I'm planning on pushing the cross-build support code onto github soon-ish and doing some testing of the cross-built perl binaries in a chroot on an armel box. I'll detail that in another blog post when it's available.

Next step is to look at perl 5.16 and then current perl upstream git to see how to get Makefile.SH fixed for the long term.

2 November 2012

Neil Williams: Introducing pyBit - Buildd Integration Toolkit

pyBit - cross-platform package building using AMQP


Message queues provide a simple way to create a distributed, cross-platform buildd toolkit to build packages using a collection of buildds, direct from various VCS clients. pyBit is intended to support rapidly evolving software collections and can support multiple VCS frontends and multiple build backends. Cross building is expected to be supported for some backends. The initial backend uses dpkg for Debian with subversion providing the source and sbuild doing the actual build.

pyBit includes support for cancelling selected builds and using multiple buildd clients per architecture, per platform and per suite.

Hooks are available or in development for subversion and git; other VCS hooks can be added. A RESTful web API provides live build reports and can generate build jobs for specific packages using particular VCS branches on selected architectures, to support re-building packages at any point in the development process. Build history is stored using postgresql.

Other buildd systems can rebuild long lists of packages or build lots of binary packages from relatively slow moving source packages. PyBit exists to handle much more rapid software development across a wide range of platforms, VCS inputs and architectures. Buildd clients which are under-used can be tasked with building multiple suites or adding cross-build support. Buildd clients which are over-utilised are easily identified and adding new machines to an existing architecture / platform / suite pool should be easy. Hook activation automatically cancels any ongoing build for the same architecture, platform and suite to avoid wasting time on an interim version.

The emphasis in pyBit is to have fast builds with redundant clients, reliable reporting using a flexible and intuitive frontend. To this end, there is no need for a source package to be uploaded. Depending on the VCS hook in use, builds can happen every time a particular file is changed (e.g. debian/changelog for a native Debian package) or at every push (for a distributed VCS) or whatever is appropriate for a particular software collection. Builds are checked for available build-dependencies and packages re-queued if the build-dependencies are not yet met.

In the longer term, it may well be possible to use more than one server / database combination to support more builds and more platforms.

So far, we've got to a working model and tagged 0.1.0 as the first downloadable release. There is a lot more to do, especially adding more documentation, more VCS hooks, support for more VCS methods on the buildd clients and more buildd client scripts for platforms other than Debian. (The git hook and git source client are expected to let pyBit be self-buildable but a certain amount of configuration will be required for the server and each client which makes it not-quite self-hosting.)

pyBit concentrates on preparing collections of binary packages which can be used to build others, rather than trying to rebuild everything every time - this allows more rapid upstream software development and encourages modular, re-usable software. pyBit will also support rebuilding specific versions, architectures, suites or platforms via the RESTful web API. Access to this frontend can be controlled through any of the standard methods.

Components are loosely coupled via JSON encoded messages sent using rabbitMQ and curl. A new client can be added at any time and it will simply pick up buildd jobs from the relevant queue.
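For illustration, a build-request message might look something like the following. Every field name here is an assumption made for this sketch; the real schema is defined by the pyBit source, not by this example:

```shell
# Hypothetical shape of one JSON-encoded pyBit build request
# (field names assumed, not the documented pyBit message format):
cat > job.json <<'EOF'
{
  "package": "foo",
  "version": "1.2-1",
  "architecture": "armel",
  "suite": "unstable",
  "platform": "debian",
  "vcs": "svn",
  "uri": "svn://example.org/repo/trunk/foo"
}
EOF
cat job.json
```

A buildd client subscribed to the armel/debian/unstable queue would pick this up and fetch the source from the given VCS URI.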

Development has now moved to Github and if anyone wants to look at new clients and new hooks, just contact the team by email or via IRC (#pybit on irc.oftc.net). Current development is based on Debian Squeeze 6.0.6 with backports and also on Debian Wheezy - more testing is welcome. Patches are also very welcome, pyBit is licensed under GPL2+.

There is no intrinsic reason why pyBit could not support any buildd platform capable of taking source from a known location and building a set of binaries. Packages are added to the queues whenever the hook is activated, so adding new packages to the collection is simply a case of triggering the existing hook.

Another interesting challenge for pyBit would be to trigger a hook on a new source code repository and just let it work through every package until everything is built. That probably won't work with the current 0.1.0 code but if that is what you'd like to work on, join the team.

19 June 2012

Neil Williams: subversion 1.7.5 insane upgrade requirement wasting days and days of effort

Upgrading the local version of subversion against a large subversion repository has so far taken three DAYS. It goes through this multi-gigabyte repository in no perceivable order; it goes through every single directory in every single tag and every single branch, and refuses to run in any sub-directory. (Even after clearing out some older tags, the tags/ directory is still nearly 8Gb and that doesn't include any binary files.) Each tag within tags/ is 500Mb and contains nearly 6000 sub-directories. Yes, it's large, but svn upgrade should still be able to handle it without crippling every machine.

I can't imagine how svn is going to manage when the server finally gets upgraded. Probably be quicker to dump the entire repo and reimport it.

When I finally give up the will to proceed or simply need to use this LAPTOP for something else, I have to interrupt it with Ctrl-C, at which point it starts all over again!

This isn't a slow machine, it's an i5 quad-core T410 with 8Gb of RAM and 1Tb of storage - and svn has made it crawl for days. The only way to pare down the working copy is to delete every tag and branch, which means losing data like old build logs and old packages. The repository is this large because it's tracking the development of multiple commercial products which share common code but which also have numerous releases and release updates.

I can't even use svn st on this repo without this completing - I haven't been able to work on this repo since this started. Absolutely insane.

Pondering filing an RC bug on the basis of unjustifiable data loss. We have many machines at work with this repository checked out and if we ever migrate to Wheezy, it's going to mean a WEEK of lost work!!

subversion                    1.7.5-1


So, a warning for anyone else using subversion 1.7.5 with a very large repository: DELETE ALL TAGS and BRANCHES and any other directory anywhere in every single working copy tree on every machine which ever wants to use that copy again before even thinking about upgrading to 1.7.5.

The tags directories don't even need to be upgraded because we only use those to rebuild the code as it was in chroots.

Unspeakably furious about such a completely dumb tool being thrown into the mix DAYS before the Wheezy freeze. Now I have to kill it AGAIN just so that I can suspend the laptop. IDIOTS.

9 June 2012

Neil Williams: Going to Nicaragua



Not DebCamp this year, but I intend to get to more talks than I did in Bosnia and still get some work done on Emdebian Integration into Debian as well as working on as many RC bugs as I can manage during the week.

Possibly easy targets via UDD

A couple of my usual RC bug filter queries:


wheezy-and-sid, ignoring patch, pending, security, claimed and not in main.
(483 bugs)


wheezy-or-sid, ignoring merged and done
(987 bugs)

24 March 2012

Neil Williams: Multiarch and debi

If you're in the habit of doing sudo debi at the end of a build, it's worth noting a complication with Multiarch.

Sometimes, it's tempting to rebuild (and install) a shared library with a few untested changes; however, if (like me) you have multiarch packages installed, this can cause a surprise:


dpkg: error processing ../libqof2_0.8.4-2_amd64.deb (--unpack):
trying to overwrite shared '/usr/share/doc/libqof2/changelog.Debian.gz', which is different from other instances of package libqof2:amd64


Bumping the changelog doesn't help:


dpkg: error processing libqof2:amd64 (--install):
package libqof2:amd64 0.8.4-3 cannot be configured because libqof2:armel is at a different version (0.8.4-2)
dpkg: error processing libqof2:armel (--install):
package libqof2:armel 0.8.4-2 cannot be configured because libqof2:amd64 is at a different version (0.8.4-3)


The fix for this is, of course, to remove libqof2:armel and all its reverse dependencies, and the packages can't be put back until you've built an armel version of your changes.

The way to back out the test change is a bit long-winded, depending on the complexity of your library:


sudo apt-get --reinstall install libqof2:amd64=0.8.4-2 libqof-dev:amd64=0.8.4-2 libqof2-backend-qsf:amd64=0.8.4-2 libqof2-backend-sqlite:amd64=0.8.4-2 libqof2-dbg:amd64=0.8.4-2 libqof2:armel=0.8.4-2


Hmm. I think a helper tool could be needed in the medium term here.

This also means that cross-building Emdebian Crush packages from a Wheezy / Wheezy+1 base is again looking as if it will need to happen inside a chroot, albeit with problems for the packages which are dependencies of the build tools (or the cross toolchain) itself, as the Crush packages will inevitably contain modified files. (For example, Emdebian Policy differs from Debian Policy by requiring - not forbidding - compression of debian/copyright.)

Cross-building packages which are intended to be binary compatible with Debian without conversion (including cross-built packages to use on top of Emdebian Grip) shouldn't be too affected. You just need to ensure that where the package exists in two or more versions, all installed architectures are built before any architecture is installed.

This is an additional burden compared to the world of dpkg-cross but it is key to how Multiarch allows cross-building to use sensible dependency resolution.

So, what I think we'll need is an enhancement to debi which can be passed an architecture (possibly an architecture list). debi would then look for both the native arch and the requested architecture(s), fail if the .changes file for the extra architecture(s) isn't found, or proceed to install all packages for the native arch and arch-dependent packages for the second architecture in the same dpkg operation. As a further enhancement, debi might be able to check dpkg --print-foreign-architectures, check the output of $package:$arch against dpkg -l and either complain or automatically use the Multiarch support to look for the relevant .changes files. debi may even want to check the package (as it's usually working in the package top directory) for Multiarch support before accepting the architecture list option.
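As a rough sketch of the first part of that check, a helper could refuse to proceed unless a .changes file exists for every requested architecture. The function name, the flat file layout and the stub files below are all assumptions for illustration, not debi behaviour:

```shell
# List the .changes files for each architecture, failing if any is
# missing, so nothing gets installed until every arch has been built:
changes_for() {
    pkg=$1; ver=$2; shift 2
    missing=0
    for arch in "$@"; do
        f="${pkg}_${ver}_${arch}.changes"
        if [ -e "$f" ]; then
            echo "$f"
        else
            echo "missing: $f" >&2
            missing=1
        fi
    done
    return $missing
}

# Demonstration with stub files standing in for a real build tree:
touch libqof2_0.8.4-3_amd64.changes libqof2_0.8.4-3_armel.changes
changes_for libqof2 0.8.4-3 amd64 armel && echo "safe to install both"
```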

Think I need to do some reading of the debi source and see if a patch is workable...

4 March 2012

Neil Williams: Rubbish in the archive

I've been asked (or been criticised through indirect media) about why packages are removed, and as I've just done quite a lot of removals at the Cambridge BSP (I've done some uploads too, it's not one-sided), I thought I'd explain my rationale in the hope that it will encourage others to remove more packages and fix those which warrant fixing. This is an approximation of the kind of scoring I would use when assessing whether to fix a package or remove it. All bug counts are for the entire source package, including all bugs reported against binaries built from that source.



RUBBISH - easy to remember, maybe.

If someone comes up with a UDD query which can resolve the above as an algorithm, a) I'll buy them a beer at DebConf and b) it could be a useful addition to the PTS...

18 February 2012

Neil Williams: Introducing ladder

We needed a new package at work and once I'd written it, I realised that it could well be useful for others, so I did the ITP and the package is now in Debian unstable.

Ladder - Stepwise repository upgrade tool

Ladder creates a local repository of packages required
to upgrade a tarball of a chroot to a new milestone or
software release.
.
The repository can be signed and includes all specified
target packages and dependencies. The repository can then
be distributed and used to upgrade multiple devices in
sequence without needing network access.


The only dependencies are some simple perl modules, reprepro for the repository, apt and gnupg for the signing. None of those need to be particularly recent versions, so once ladder has had a little time in testing, I will be doing a backport to Squeeze. Lenny is a little less likely but eminently possible if someone specifically asks for it. Not that it particularly needs a backport mind, if you want to run ladder on some other Debian-based system, it's simply a case of installing the version in unstable. reprepro was in Lenny at a suitable version for what ladder requires, so any system remotely recent should be able to run ladder without changes.

Normally, whenever you need to upgrade a device, you need to get that device onto a network, get access over the serial port or some form of pre-configured network connection, run the commands and clean up afterwards. OK, now do that again... oh and here are another 40 or 100 devices which all need precisely the same thing done and they all have to be shipped to paying customers today. What does any geek do? Script it.

Ladder is part of that process. There would need to be some scripting / programming support on the devices concerned but ladder makes it easier to put that support in place at the design stage and then automate the actual update without the device needing to get onto a network and, potentially, allowing you to decide how to offer the upgrade (automatic, user involvement, engineers only etc.)

Once that is implemented, the device is simply pointed at a locally mounted filesystem which contains the ladder step for the migration. Each step is a SecureApt signed repository which is available at a deterministic location which can be easily scripted and which contains only the packages necessary (with dependencies) to upgrade any device installed at Milestone A to Milestone B. Nice and small, only the stuff you need, whatever architecture you like and if you need to migrate from Milestone A to Milestone D, then you create a few steps on the ladder and automate the migration from A to B to C to D.

All you need is enough space on whatever local media you need to use to plug into the devices. Then copy the media enough times, start the interface on each device and do something useful whilst the upgrade happens. Then, when the next lot need to be upgraded, you already have the media... Oh and ladder does not need to run on the same architecture as the devices - all package handling is entirely architecture-neutral. So create the ladder steps and media on your fast x86 machines and then let the slower SSD devices take their own sweet time via automation. Simple - maybe, hopefully.
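As an illustration of the deterministic-location idea, a device-side script might construct its apt source line from the milestone names. The path scheme and file names below are assumptions for this sketch, not documented ladder behaviour:

```shell
# Assumed layout: one signed step repository per milestone pair,
# named so the device can compute the path from its current milestone.
STEP=/media/usb0/ladder/milestoneA-to-milestoneB
echo "deb file:$STEP stable main" > ladder-step.list
cat ladder-step.list
# A device-side script would copy this into /etc/apt/sources.list.d/
# and then run: apt-get update && apt-get dist-upgrade -y
```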

I'm expressly talking about devices here because this is where the original requirement arose - embedded devices which don't have direct / automatic / accessible network ports or even a working network configuration preset. These aren't new images to flash onto the device either, these are apt repositories, so only the stuff which needs to be changed is changed which saves a lot of time writing to slow SSD storage.

If the mechanism sounds familiar, yes, it's using the same principles as xapt, multistrap and emdebian-grip but I thought it could be useful, so I generalised the interface (a bit) and uploaded.

Let me know via the BTS if you need tweaks to the support. Ladder isn't particularly about getting desktop machines from Debian Potato to Debian Wheezy because you need to identify a target package to be the top point of the dependency chain which gets into the step repository, and all devices should start with the same packages at the same versions. Normally, the kind of devices I use at work have a single top level application which is the sole user interface and which directly or indirectly depends on all the other software required for that device. Whilst the target package value could take a few carefully selected packages instead of just one, and there is support for an extra packages field, it's not intended / expected to be useful for desktops.

This is for use with multiple devices which all have the same package selection, are at the same milestone at the start and which all need to migrate smoothly to whatever milestone is intended - one step at a time. Ladder could be used to migrate between Debian releases and I've included example config files to support that, but the principal usage is with internal / proprietary repositories based on a common Debian stable platform where the user interface software is managed via identifiable milestones and where users have no direct control over installing packages. It's a production / manufacturing support tool more than a user / admin support tool. If you want to manage servers and desktops which allow users to install (or request to be installed) arbitrary packages, there are plenty of tools which already support this and are used by DSA in Debian for exactly these tasks. Ladder is not puppet but Linp didn't sound like a good name...

Ladder step repositories only include the binary packages, not sources - so if the migration is not going to be done in-house and you're expecting to distribute these steps to users somehow, ensure that the original source repository is available online for the Debian / free software packages concerned. If you can't do that (because the source is proprietary), you possibly shouldn't be distributing the binaries as a user update mechanism in the first place because each step will include packages from the core Debian system which need to have the sources available when distributed.

Inevitably, the ladder source package already contains two POT files for translations of the runtime messages and the POD based manpage. I expect to need to expand / clarify the manpage in due course, so don't rush to translate it yet as it's likely to change.

12 February 2012

Neil Williams: Multi-Arch progress

With dpkg from experimental and the new zlib upload in unstable, I've now got a partial Multi-Arch install. There are more packages necessary, particularly related to how -dev packages can exist and how a cross compiler gets built/installed in a Multi-Arch world. Only one bug (#659588, in libglib2.0-0) so far, but that's quite good seeing as it's been all but impossible to test the Multi-Arch changes in existing packages until now.
$ dpkg --print-foreign-architectures
armel
i386

$ dpkg -l | cut -c -80 | grep armel | grep -v cross
ii gcc-4.6-base:armel 4.6.2-14
ii libc6:armel 2.13-26
ii libdatrie1:armel 0.2.5-3
ii libffi5:armel 3.0.10-3
ii libgcc1:armel 1:4.6.2-14
ii libgmp10:armel 2:5.0.4+dfsg-1
ii libgmpxx4ldbl:armel 2:5.0.4+dfsg-1
ii libgomp1:armel 4.6.2-14
ii libmpc-dev:armel 0.9-4
ii libmpc2:armel 0.9-4
ii libmpfr-dev:armel 3.1.0-3
ii libmpfr4:armel 3.1.0-3
ii libpcre3:armel 8.12-4
ii libpixman-1-0:armel 0.24.4-1
ii libpng12-0:armel 1.2.46-4
ii libpopt0:armel 1.16-3
ii libpwl5:armel 0.11.2-6
ii libselinux1:armel 2.1.0-4.1
ii libstdc++6:armel 4.6.2-14
ii libthai0:armel 0.1.16-3
ii linux-libc-dev:armel 3.2.4-1
ii zlib1g:armel 1:1.2.6.dfsg-1

There are more packages which can be installed on amd64 for i386 as #659588 only affects Multi-Arch versions which cannot execute compiled binaries from the foreign architecture.

Further progress, inside a test chroot, involves using gcc-4.7 from experimental but even then, libc6-dev is not installable as a Multi-Arch package. That's the current blocker for toolchain stuff.

ii gcc-4.7-base:i386 4.7-20120210-1
ii libgcc1:i386 1:4.7-20120210-1


Once we have libc6-dev:armel installable on i386 and amd64, work on cross-building Emdebian can be considered again. It's been a long time.

23 January 2012

Neil Williams: Emdebian Grip automated dependency resolution

The script isn't fully automated yet, I'm running it manually and watching for errors, but I do now have a simple script (~100 lines of perl) which runs edos-debcheck against each of the supported Emdebian architectures for unstable-grip and picks out those packages which are missing from the subset of packages which Emdebian monitors for Grip.

I was hoping to make these scripts into a normal Debian package but there are some limitations in that the scripts need to assume various pieces of Debian infrastructure. e.g. I started off using rsync to pull in the Packages files (along with Release and Sources) because apt has annoying behaviours with multiple architectures. (Emdebian Grip processes 7 architectures simultaneously on blavet.debian.org). Peter Palfrader kindly arranged for a local mirror to be available and now the entire Packages processing can be done using symlinks. Much, much better.
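A sketch of the symlink approach (every path below is a stand-in, not the real mirror layout on blavet.debian.org): point working files straight at the local mirror's per-architecture Packages files instead of rsyncing copies around.

```shell
# Stand-in mirror tree; a real setup would symlink into the actual
# local mirror rather than creating empty files.
MIRROR="$PWD/mirror"
mkdir -p work
for arch in amd64 armel armhf i386 mips mipsel powerpc; do
    mkdir -p "$MIRROR/dists/sid/main/binary-$arch"
    : > "$MIRROR/dists/sid/main/binary-$arch/Packages"
    # One symlink per architecture, read directly by the Grip scripts:
    ln -sf "$MIRROR/dists/sid/main/binary-$arch/Packages" \
           "work/Packages.sid.$arch"
done
ls work
```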

Actual package movement data comes directly from dak via projectb - something else which makes packaging these scripts for use on non debian.org boxes a bit hard.

Once again, I am indebted to the EDOS team (please, please keep working on edos-debcheck - it hasn't had uploads recently) because I tried to get this working with germinate but failed. (Colin - I'll be looking to borrow some of your time to work out what was going wrong.)

The dependency resolution script now means that I can meet release team requirements that when foo is updated in sid to depend on libbar2 but Emdebian Grip only has libbar1, the scripts will automatically pick up libbar2 and add it to Emdebian Grip. The current sid version gets pulled in and (the theory goes) will migrate into testing-grip as an almost automatically valid candidate. Processing of the dependency can happen within hours of the updated version of foo arriving in Emdebian Grip but as long as it happens within a few days I think everyone will be fine with it.

I've got a sync script too which can periodically run across the entire suite and check for packages which have slipped the net. I'll have to see how often that needs to run but once a week doesn't sound too painful.

Processing for Emdebian Grip is very fast. Most of the time is taken uploading:

2012-01-23 22:24:42 add sid-grip deb main armhf libfltk-images1.3 1.3.0-5em1
2012-01-23 22:24:52 add sid-grip deb main i386 libfltk-images1.3 1.3.0-5em1
2012-01-23 22:25:07 add sid-grip deb main mips libfltk-images1.3 1.3.0-5em1
2012-01-23 22:25:25 add sid-grip deb main powerpc libfltk-images1.3 1.3.0-5em1
2012-01-23 22:26:00 add sid-grip deb main amd64 libstdc++6-4.5-dev 4.5.3-12em1
2012-01-23 22:26:34 add sid-grip deb main armel libstdc++6-4.5-dev 4.5.3-12em1


(It was a lot longer, comparatively, when I was having to rsync and then dget instead of just read from the local filesystem.)

I've also been working on an init script which will gradually work through the subset of packages by order of Priority - approximately what a new architecture (like armhf) would do. Note that Emdebian Grip supports armhf already. The only slight concern is whether the init script needs to be throttled back to not flood the queues with hundreds of uploads an hour (which it could end up doing, at least for the first 24 hours).
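The Priority ordering could be sketched like this: rank each stanza of a Packages file by its Priority field and emit required packages first, roughly the order a new architecture bootstrap would want. The sample stanzas are stand-ins for the real package subset:

```shell
# Three stand-in stanzas in Packages-file format:
cat > Packages <<'EOF'
Package: foo
Priority: optional

Package: base-files
Priority: required

Package: vim
Priority: standard
EOF

# Read stanzas in paragraph mode, rank by Priority, sort and strip rank:
awk -v RS= -F'\n' '
BEGIN { rank["required"]=0; rank["important"]=1; rank["standard"]=2;
        rank["optional"]=3; rank["extra"]=4 }
{ pkg=""; pri="optional"
  for (i = 1; i <= NF; i++) {
      if (index($i, "Package: ") == 1)  pkg = substr($i, 10)
      if (index($i, "Priority: ") == 1) pri = substr($i, 11)
  }
  if (pkg != "") print rank[pri], pkg
}' Packages | sort -n | cut -d' ' -f2 > queue.txt
cat queue.txt
```

Throttling would then just be a sleep between uploads as the script walks queue.txt.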

Logs of package movements are available: http://www.emdebian.org/grip/logs.php

Packages can be searched too: http://www.emdebian.org/grip/search.php

More on Emdebian Integration on the Debian wiki: http://wiki.debian.org/EmdebianIntegration

Now all I need is a fix for #655919 :-) FTP team?

4 November 2011

Paul Wise: Migrating from Galeon to Iceweasel/Firefox

I have been a long-time user of the Galeon web browser, which, while powerful for its time, is getting a bit long in the teeth and has been abandoned for a long time. As a result I need to find something new. As Galeon uses the Mozilla engine I figure switching to Iceweasel/Firefox will be the least amount of pain since they share similar formats for lots of the user configuration and data (with the exception of history). Switching to Firefox also gives me access to a lot of configurability and a vast sea of extensions all written in RDF, JavaScript and zoooool (ahem, XUL). Another plus was that I was already using Firebug for the occasional web development project.

Looking at the Mozilla addons site is like entering someone's shed. There will be the few beautiful unfinished projects still being worked on, one polished finished sculpture gathering dust but still admirable, things with power plugs from a bygone era, some things that have a coin slot on them, some cryptic machines with no visible screws or manuals, some spiders and their cobwebs and a few rats and mice chewing through things. A place where you can find some excellent, well documented, useful and Free Software extensions alongside lots of crud. Luckily for me the good stuff that I wanted to use was already in Debian or the friendly Debian Mozilla extension maintainers team was willing to package it for me.

In my quest for freedom from Galeon I first noticed that there is no up button in Iceweasel. Bummer, I use that a lot, so I went searching for extensions. I soon found one extension, but it hadn't kept up with the ever-changing Mozilla APIs so it fell by the wayside. Thanks to the leavers of breadcrumbs I picked up the trail to a new, shiny and working extension. Lo and behold, I found Uppity, which was all about going up and, as it turns out, much better at that than Galeon. Thanks to the MozExt team, that's solved, next!

The next glaring problem was the lack of Galeon-style smart bookmarks. Before you ask: yes, Firefox smart keyword bookmarks are not the same thing. This was ridiculously hard to search for due to the wording used by both projects being same same but different. Some folks switched to Epiphany to get the extra search boxes on their toolbars. Like this guy I was not interested in that, mainly due to the addons I would be missing out on. I tried a few different tacks, even searching for a way to have multiple search boxes in the Firefox toolbars. I soon gave up on finding an extension that would do this like Galeon does, so I figured it's time to roll up the sleeves and learn some zzzoooool. I already know a little bit about JavaScript and CSS so... First slap on a dash of a tutorial about adding toolbar buttons, add a sliver of adding extensions without installing them, stare down some Mozilla reference manuals, throw in a pinch of favicons, give up on a wild goose chase or two, add a big fat blob of zoooool and sauté in fugly hacks. Soon enough you will have something hardcoded that works like Galeon smart bookmarks but looks even better.

[Screenshot: hacky Galeon smart bookmarks in Iceweasel]

I may eventually turn this into a proper and functionally equivalent extension for Galeon-style smart bookmarks but for now it will remain a useful hack. If you want to get your hands dirty with zzzoooooool and try this out after modifying it to use your personal search URLs, please feel free to contact me. For now the only remaining issue I can see is that the forward/back buttons in Iceweasel don't have the explicit menu buttons. This is a minor issue for me, so now it is time for me to figure out how to migrate my data and config1 before permanently switching away from Galeon. Wish me luck!

1. Of course the data and config are fugly, but that is something for another, much broader and more complicated hack

2 October 2011

Neil Williams: Uses of Emdebian - special purpose computers

A continuation of my intermittent series on Emdebian and prompted by a query from Paul Wise, I thought I'd cover some of the uses to which a device running Emdebian can be put. It is a bit long this one...

So, yes, there are general purpose computers which could run Emdebian but there are common features amongst general purpose computers:
  1. Lots of different tasks must be supported - lots of different packages installed, each user could quite easily use a distinct set of packages from another user.

  2. Multi-user support - even if any one machine is only used by one person, a general purpose computer must have support for adding another user - not just a system user, a real user with a login, shell, possibly bookmarks and browser history etc.

  3. Connectivity - general purpose computers (and that includes mobile phones) must connect to a variety of other computers to be at all useful.

  4. Multi-modal data input - there needs to be support for a mouse and a keyboard or touchscreen and just as likely, software support for accessibility interfaces like an on screen keyboard and switch inputs.

  5. Storage for user data - the system admin can't know how much data users are going to add and if it is general purpose, it's probably going to support media players, image viewers, even office software. The data files managed and written by such software tend to be large and tend to accumulate into large collections. This means that a general purpose machine cannot anticipate the full storage requirements, so has to allow plenty of room for user data storage. If there's so much storage space available, why use an OS which is primarily designed to take up less space? Just use the full version, Debian.


Now compare with the common features of special purpose or single-purpose computers:
  1. Single task only - a splash screen (without support for removal) covers all the boot information, the single task starts as a daemon and if the user quits the task, the task initiates a shutdown, not quit. Every effort is made to prevent the task from exposing the underlying OS, including auto-restarts if it does crash.

  2. Single-user support - indeed, user is not the typical user. user in this case is a non-login user without a shell, created by the postinst of the task package purely to support running the task without root privileges (because if the task doesn't need privileges, it shouldn't retain privileges).

  3. Single mode input - don't expect these machines to support 105 key keyboards. You'll be lucky to have 5 keys which are really just buttons. The button won't just enter 'J' into the machine, it will fire off a command to charge your bank or start a drill or shutdown a malfunctioning service. (Each button will often be multi-functional as well, depending on context). Some devices will replace the mouse with a touchscreen but many are fitted in places where touchscreens simply won't survive, where keys have to be able to withstand being submerged in water (or oil) for hours. Many machines simply won't have mouse support at all. Many will not have any typical external connectors. Indeed, units which go into dirty or wet environments certainly won't have standard external connectors of any kind. Think boats and factories. Internal battery packs with custom hot-fit replacements, buttons encased in 4mm of clear plastic, screens viewable only through protective mesh / filters.

  4. Restricted connectivity - if the task doesn't need networking, no networking hardware is fitted. If the task doesn't need USB, no USB hardware is fitted. If the task doesn't need serial connections, there will probably be an internal serial connector but also a Warranty void if removed sticker. Don't expect an RS232 port inside the case either; this will be a flat cable connector which you'll need to attach to a breakout board which correctly connects the cable wires to the RS232 port pins. That mapping is not standardised (why would it be? Only service engineers are approved to use it).

  5. Constrained user data - the task is in complete control and determines what the user can view, store and access. Storage can be expensive and a power drain, so if the task doesn't need Gb of storage, don't fit the machine with Gb of storage. Profit.


So what are these mysterious single-use computers running Emdebian? It's hard to tell because the task has no particular reason to tell the user anything about the OS. Perhaps a different way to phrase the same question would be: Why use a general purpose OS for a special purpose computer?


This is why Emdebian Grip is popular for these machines - because Emdebian Grip is binary compatible with the equivalent Debian suite and when a bug appears in the high level user interface, it is much easier to debug that on the desktop than on device. Debian stable is a fantastic OS for commercial development - the stability of the OS makes detection of interface bugs much easier. It is rare that developers have to investigate whether bugs in their UI come down to a bug in the underlying OS. That saves time.

The machines themselves? I'm sure others can come up with other examples but these are some which come to my mind (no particular order):


Anywhere where the device simply needs to DoTheRightThing in a variety of unpredictable circumstances, you're going to need a general purpose OS to gather the data and produce the right result. Anywhere where the device has only a single task, you're going to want to avoid providing access to the rest of the system. There's no point providing "general purpose access" when the device doesn't have "general purpose hardware". There's no point fitting general purpose hardware if the device does not use it. That is a waste of money, a waste of resources and causes unnecessary delays if one component or other becomes unobtainable or changes behaviour. Limit your exposure to hardware bugs and get the product out to the user more quickly and more reliably, and everyone is happy. Fit non-standard hardware and you can fit custom hardware which better matches the profile of the device usage - why use standard speakers capable of CD quality across a vast frequency range when the device only makes very, very loud alert noises which can be done at lower sample rates and with narrow-range, high-amplitude speakers?

There are all sorts of estimates for the number of computers now in the average Western home. It's worth noting that the vast majority of those computers are not PCs or laptops or even game consoles and routers - all instances of general purpose machines with general purpose access / connectivity. A lot of the computers in your home are embedded in machines like white goods, media controllers (set top boxes and the like). Some of these will still use low level firmware written entirely in assembly or COBOL. More complex ones already have a general purpose OS inside yet constrain that OS with a simple, user-friendly interface which is tailored to the hardware actually fitted. It might be cool to connect to your set top box over Bluetooth but really, is that actually going to sell more units? Only if there are other high-end services written for the device which the average customer won't want to see pushing up the price of the base unit. Some machines (particularly in automotive uses) will need to be using Real Time operating systems and Safety Critical operating systems but even some of those are based on general purpose systems - just old versions which have been thoroughly tested.

Finally, consider that for a lot of these devices, the customer is not you. The customer is another business who then put the device into something else which is installed into a factory production line which produces something which goes into creating a consumer product you find in Tesco or on Amazon or bundled in with another service like your TV/broadband contract. There is absolutely no reason for these devices to provide general purpose computing to you, even if the device itself is part of something else which can provide such services.

13 August 2011

Neil Williams: Lintian support used in Emdebian

OK, this one is meant for Planet Debian...

One of many, many changes in the latest lintian is vendor profiles and, thanks to a heads-up by Niels Thykier, Emdebian will have working profile support in the next upload of emdebian-grip. (The only reason it's not already in Debian is my own fault for not uploading when I thought I had the time to upload.)
$ lintian --profile emdebian-grip drivel_3.0.2-1em1_amd64.deb 
$ lintian --profile debian drivel_3.0.2-1em1_amd64.deb
E: drivel: debian-changelog-file-missing
E: drivel: copyright-file-compressed
W: drivel: copyright-without-copyright-notice
E: drivel: description-contains-invalid-control-statement
W: drivel: binary-without-manpage usr/bin/drivel

So the em1 version implements Emdebian Policy for Emdebian Grip. Clean for Emdebian Grip, just as the Debian package is clean prior to the changes.
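For reference, a lintian vendor profile along these lines might look something like the following sketch. The file path and the exact tag list here are assumptions for illustration; the authoritative list of tags to disable belongs to Emdebian Policy:

```
# Hypothetical profiles/emdebian-grip/main.profile
Profile: emdebian-grip/main
Extends: debian/main
Disable-Tags: debian-changelog-file-missing,
 copyright-file-compressed,
 copyright-without-copyright-notice,
 description-contains-invalid-control-statement,
 binary-without-manpage
```

With the Debian tags that conflict with Emdebian Policy disabled, the em1 package comes up clean under the vendor profile, as shown above.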

I expect this to dramatically improve the processing of Emdebian packages, both for Grip and for the cross-built flavour, Crush, once that actually starts up.

Thanks to the lintian team for this support. Now if there was some way of backporting this version of lintian to Squeeze, I could also use this at work to suppress really annoying lintian warnings about non-standard suite names. (The whole point of using a non-standard suite name is to keep our stuff separate from the normal Debian/Emdebian stuff for licensing reasons etc.)

Update: of course, I didn't check the PTS for lintian before writing that, so I didn't notice that the backport already exists! Thanks again to Niels for the prompt. I've now got another package to create for work. ;-)

Update2: Thanks to Joerg Jaspert for the tip that all lintian versions get backported directly from unstable as an exception on ~bpo. The work package is ready, so this is going to make things a lot easier when building stuff on stable.

In other news, the same version of emdebian-grip will include support for integrating Emdebian Grip into Debian itself. This too will use vendor-specific support, this time an internal vendor name which just needs to work on the "buildd".

(It's not quite a buildd, Emdebian Grip doesn't build anything, it's all post-processing. It's just that the processing of Debian packages for Emdebian Grip will look a bit like having a second buildd working on the packages uploaded by the existing buildd. The process itself is still developing...)

Neil Williams: multistrap runtime translations

Multistrap runtime message translations need updating but the manpages aren't being done this time around. More changes are likely before those get done.

Current runtime translation status:

language translated fuzzy untranslated
-----------------------------------------------------
da 103 4 4
fr 54 17 40
pt 54 17 40
vi 54 17 40

http://lists.debian.org/debian-i18n/2011/08/msg00105.html
Check with the relevant debian-i18n team for your language; this is just a prompt in case people miss the mailing list post.

Update: Hmm, that went to the wrong blog. It got tagged i18n but was meant for the i18n blog. Ah well, it doesn't hurt for Planet Debian to see what is going on elsewhere....

25 June 2011

Neil Williams: lintian profiles for Emdebian

Just been testing the upcoming lintian 2.5.2 vendor-profile branch for operations with packages processed by emdebian-grip, both for the emdebian-grip and the emdebian-baked vendors.

The good news is that instead of the problematic support from Lenny, lintian in Wheezy will have support for testing Emdebian packages using Emdebian Policy. All Debian tags are up for grabs, no more exceptions for checks which couldn't be disabled before (like the forced removal of manpages).

Also been able to work out how to sort out lintian checks for Emdebian Crush including adding the extra tags which haven't seen use since Lenny.

So that's outline lintian support for Grip, Baked and Crush. Thanks to Niels Thykier for the lintian support.

12 June 2011

Neil Williams: Ideas for TDeb packages

This is a follow-up on TDebs with some detail of how existing packages can be adapted to support TDebs.

$ cat debian/package-tdeb.tdeb

[gettextdir] po
[gettextname] package
[po4apodir] doc/po
[po4amandir] doc/package/man/
[po4acommand] po4a-build


What's happening then? Well, the first thing to take into account is that a lot of the values for these settings will be dictated by upstream, which is why it is necessary for maintainers to add this information as a support file in debian/. I tried determining all this information automatically but the script got far too long and still was far from complete.
Other changes
The (currently unsupported in Debian) XC-Package-Type: tdeb option is used in debian/control to mark a new or existing package as a TDeb. To use an existing package, it must comply with the rules for a TDeb, including that it must be Architecture: all. The other rules centre around build issues. If any of the files in the existing package would require execution of any part of the normal build process of the package, those files would have to be moved out of the package before it could become a TDeb. The reason is that TDeb generation by translators must not require a full build environment for every individual package and must not risk modifying package contents in ways which are related to the build environment. So unchanged images and other architecture-independent files are fine; any files which need build-dependencies are a problem.

I don't know how many packages there are out there with such files. If you have a -data or -common package which contains translated data and that package also includes architecture-independent files, I'd be interested in knowing the name of the packages involved. If those packages also include files which are generated or modified within the package build, I'll be particularly interested in knowing about those source packages.
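As a sketch, a debian/control stanza for such a package might look like this. The package name and description are invented for illustration, and, as noted, dpkg does not yet support the field:

```
Package: frobnicator-tdeb
XC-Package-Type: tdeb
Architecture: all
Depends: ${misc:Depends}
Description: translation files for frobnicator
 Gettext .mo files, translated manpages and debconf template
 translations for the hypothetical frobnicator package.
```

The Architecture: all requirement follows directly from the rule that translators must be able to rebuild the TDeb without a full build environment.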
debconf
The most common use of TDebs during a release cycle is probably going to be debconf template updates by Christian and these are handled without needing a listing in the .tdeb file.
Other formats
I'm looking at QtLinguist support and it should be fairly straightforward using the .tdeb file arrangement.

Once that's done (in a branch, please - these can't be uploaded yet), the package can be tested with the soon-to-be-uploaded emdebian-tdeb package. The old content has been moved into the emdebian-grip-server package and the dependencies of the old version stripped back. The package contains two scripts, dpkg-gentdeb and dh_gentdeb, which I hope to get included into the respective packages in time for Wheezy. The idea is that build systems (dh7, CDBS, debhelper-plain etc.) include a call to the relevant script (dh_ is just a wrapper around dpkg- in this case). This would allow maintainers to create the first TDeb, get that through NEW and then the fun starts.

Once the translation mechanism is set up, I would like a nominated person (likely to be Christian if he agrees) to co-ordinate translation updates for particular packages to allow for a single upload including updates for many translations. This would be package_version+t1_all.tdeb and an appropriate .diff.gz or debian.tar.gz. No binary upload, no impact on testing migrations.
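For a dh7-style package, the call could slot into debian/rules roughly like this - treat the exact sequencing as a guess, since the integration point for the script is still being worked out:

```make
# Illustrative only: run dh_gentdeb just before dh_builddeb so the
# TDeb is generated alongside the other binary packages.
%:
	dh $@

override_dh_builddeb:
	dh_gentdeb
	dh_builddeb
```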

This is mostly theory right now but the scripts do exist in Emdebian SVN.

Still more testing to do but I hope to upload these scripts to unstable in the next few days.

10 June 2011

Neil Williams: Putting the magic smoke back into the ring main

Haven't been in my new home that long and through a long chain of contacts, my neighbour was finally able to contact me. (Just goes to show that having Debian friends nearby who like board games turns out to make some things JustWork.)
So, when I got the message and got home the flat was fine - just with no electrical power. Murphy's Law ensured that my laptop was also low on power, as was my mobile phone and my landline is cordless and the router was, naturally, not working. My communication options were suddenly rather limited.

The 50yr old wiring in this block finally surrendered the magic smoke and caused a fire downstairs (mostly melted insulation) before it disconnected. I got home around 1pm, electricians to fix the problem arrived about 5pm and haven't finished quite yet. (It's now 12:11am). A relatively large hole in the lawn was dug (a hard task in itself seeing as this area is officially in a drought and the ground is like iron), two junction boxes rebuilt, new connection made from downstairs to upstairs, the work just went on and on.

I'll get some accurate details when I manage to speak to my brother - I do a little bit of hardware but I deal in 1W or a few hundred mA, when a 80A cable melts I'm just a bit out of my depth. I've not seen a fuse that size since I left school.

Thanks to those Debian and non-Debian people who helped me actually contact the relevant people today. I was due to get so many things done today, instead I got through the worry to end up incredibly bored - it turns out to be surprisingly difficult to find things to do when there is no mains power, all the batteries in mobile devices are flat and it's too dark to read.
:-(

8 June 2011

Neil Williams: TDeb processing...

Just added a TDeb test build branch for multistrap to test out the changes in the next version of the emdebian-tdeb binary package - specifically:

Move em_installtdeb into the emdebian-grip-server package so that the emdebian-tdeb package can be slimmed down to only contain what will become useful for other maintainers to use in their debian/rules files and for translators when the time comes to implement DEP-4. CDBS rules / patches to follow... bug reports against dpkg and debhelper to follow ...

DEP-4 itself is being restarted and refreshed but with some problems on alioth currently, I'm going to (try and) keep the document up to date via SGML and people.debian.org.

The changes are now in Emdebian SVN and I'm starting testing with my own packages first. Multistrap for PO and po4a-build testing, dpkg-cross for debconf testing, some others for testing with build systems other than CDBS, then the discussions will restart on debian-devel and I'll upload the TDeb support scripts to unstable.

There are a variety of things which need to happen before these scripts become useful, including:

Why is the package called emdebian-tdeb? Simple - that's where the scripts have lived until now and the hope is that dpkg-gentdeb can migrate into dpkg-dev and dh_gentdeb into debhelper - eventually.

By all means play with the scripts and follow the example multistrap branch above to see what kind of changes might be needed in your own packages. I'll post again when I've worked out the changes needed in packages like dpkg-cross which only need TDebs for debconf support. (This is the easiest mode of TDeb usage but it's also the most useful during a release cycle - maintainers make a single change and then a nominated i18n boss-person - take a guess who - can update all your debconf translations in one upload without a binary build - eventually possibly not even needing any bug reports, just a simple email advance warning.)

What I've got to work on still is how the generation of an updated TDeb creates a new debian diff / debian.tar.gz to accompany the upload. For that I need XC-Package-Type: tdeb support in dpkg....

Sample output from the viewpoint of a translator updating a package already converted to support a TDeb:


neil@sylvester:tdeb-test$ dpkg-gentdeb -t
Set the changelog message to version 2.1.15+t0 first, closing any relevant bugs!
dch -v 2.1.15+t0
neil@sylvester:tdeb-test$ dch -v 2.1.15+t0 TDeb test
neil@sylvester:tdeb-test$ dpkg-gentdeb -t
Building TDeb update: 2.1.15+t0
(192 entries)
Discard doc/pod/1/da/multistrap (65 of 177 strings; only 36.72% translated; need 50%).
Discard doc/pod/1/da/device-table.pl (5 of 19 strings; only 26.31% translated; need 50%).
Processing untranslated files for pod/multistrap (1) . . .
Processing untranslated files for device-table.pl (1) . . .
Processing de translations for pod/multistrap (1). . .
Processing de translations for device-table.pl (1). . .
Processing fr translations for pod/multistrap (1). . .
Processing fr translations for device-table.pl (1). . .
Processing pt translations for pod/multistrap (1). . .
Processing pt translations for device-table.pl (1). . .
dpkg-gencontrol -pmultistrap-tdeb -Pdebian/multistrap-tdeb -cdebian/control
dpkg --build debian/multistrap-tdeb ../multistrap-tdeb_2.1.15+t0_all.tdeb
dpkg-deb: building package `multistrap-tdeb' in `../multistrap-tdeb_2.1.15+t0_all.tdeb'.
neil@sylvester:tdeb-test$ dpkg -c ../multistrap-tdeb_2.1.15+t0_all.tdeb
drwxr-xr-x root/root 0 2011-06-08 21:06 ./
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/locale/
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/locale/pt/
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/locale/pt/LC_MESSAGES/
-rw-r--r-- root/root 6427 2011-06-08 21:06 ./usr/share/locale/pt/LC_MESSAGES/multistrap.mo
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/locale/vi/
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/locale/vi/LC_MESSAGES/
-rw-r--r-- root/root 6963 2011-06-08 21:06 ./usr/share/locale/vi/LC_MESSAGES/multistrap.mo
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/locale/fr/
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/locale/fr/LC_MESSAGES/
-rw-r--r-- root/root 6789 2011-06-08 21:06 ./usr/share/locale/fr/LC_MESSAGES/multistrap.mo
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/locale/da/
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/locale/da/LC_MESSAGES/
-rw-r--r-- root/root 16311 2011-06-08 21:06 ./usr/share/locale/da/LC_MESSAGES/multistrap.mo
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/man/
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/man/pt/
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/man/pt/man1/
-rw-r--r-- root/root 33963 2011-06-08 21:06 ./usr/share/man/pt/man1/multistrap.1
-rw-r--r-- root/root 4471 2011-06-08 21:06 ./usr/share/man/pt/man1/device-table.pl.1
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/man/vi/
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/man/fr/
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/man/fr/man1/
-rw-r--r-- root/root 33969 2011-06-08 21:06 ./usr/share/man/fr/man1/multistrap.1
-rw-r--r-- root/root 4708 2011-06-08 21:06 ./usr/share/man/fr/man1/device-table.pl.1
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/man/da/
drwxr-xr-x root/root 0 2011-06-08 21:06 ./usr/share/man/da/man1/
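The Discard lines in the output above imply a simple coverage threshold: a translated document is kept only if at least half of its strings are translated, with the reported percentage truncated to two decimal places. A minimal sketch of that check (the function name is hypothetical; the real logic lives in dpkg-gentdeb):

```python
import math

def keep_translation(translated, total, threshold=50.0):
    """Return (keep?, percentage), truncating the percentage to two
    decimal places to match the tool output shown above."""
    pct = 100.0 * translated / total
    return pct >= threshold, math.floor(pct * 100) / 100

print(keep_translation(65, 177))  # (False, 36.72) - da multistrap manpage
print(keep_translation(5, 19))    # (False, 26.31) - da device-table.pl
```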


I'm also going to DebConf11



On the RoadTrip! All the way through France, Belgium, Netherlands, Germany, Austria, Slovenia, Croatia, Bosnia-Herzegovina (Banja Luka) in an MX5.

I've got a lot of things to talk to people about @ DebConf this year... you won't be able to tell if I'm just buying you a beer or trying to badger you about your packages!! Just note that TDebs have already been agreed (in terms of how TDebs work in the archive) with ftp-master at a previous meeting in Extremadura, and I'm being badgered by a Release Manager and an ex-DPL to get TDebs implemented real soon now, so it is going to happen, hopefully in time for Wheezy - it's just a matter of fixing the bugs. The quicker everyone has a look at the new scripts (which have been completely re-written and immensely simplified since I first tried a dumb conversion of em_installtdeb) and the likely changes needed in packages, the quicker I can get the bugs out of the scripts.

More documentation to follow, I'll be updating the SGML whenever practical and the actual DEP just as soon as I get commit access back.

24 March 2011

Neil Williams: Qt-embedded [armel] for Emdebian

I'm experimenting with building Qt Embedded for armel using Emdebian toolchains and Debian packaging/patches. Initially, I've just "got it to build" with a few too many things nobbled and disabled - the results (using the package from Squeeze) are in Emdebian SVN - and the next task is to enable the missing bits and get closer to the standard Qt4-X11 binaries. Certain packages will simply not exist, even when finished, but it's been fun working out how to create a qmake.conf to map the Emdebian toolchain to what Qt expects from toolchains like CodeSourcery.

If it continues to work, it may be worth pushing upstream - but it may be better to wait until Multi-Arch supports cross-building, I'd only have to submit a different qmake.conf with updated paths otherwise.

I've also got a basic qmake.conf to help Qt applications cross-build using the Emdebian toolchains. This also needs updating once we finally get Multi-Arch-Cross.
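For illustration, a fragment of what such a qmake.conf might set for the armel toolchain. The variable coverage here is deliberately incomplete and the tool prefix is the standard Debian one; a real mkspec needs many more settings:

```
# Hypothetical qmake.conf fragment mapping the Emdebian armel toolchain
QMAKE_CC         = arm-linux-gnueabi-gcc
QMAKE_CXX        = arm-linux-gnueabi-g++
QMAKE_LINK       = arm-linux-gnueabi-g++
QMAKE_LINK_SHLIB = arm-linux-gnueabi-g++
QMAKE_AR         = arm-linux-gnueabi-ar cqs
QMAKE_OBJCOPY    = arm-linux-gnueabi-objcopy
QMAKE_STRIP      = arm-linux-gnueabi-strip
```

Once Multi-Arch cross-building lands, mostly the paths in such a file would need updating, which is one reason to hold off pushing it upstream.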

The way I see this working is that the Qt Embedded packages would have to be kept entirely separate from the Debian Qt4-X11 binaries from the KDE/Qt team because they use the same package names. That's relatively simple if the embedded packages are kept for cross-building only, either as -armel-cross packages generated by dpkg-cross or (eventually) as Multi-Arch packages. What won't work is having Qt-X11 and Qt-Embedded installed on the same architecture without using a chroot or similar.

Initially, I hoped to keep the changes small enough that the Debian package could include the tweaks in a conditional part of debian/rules via a DEB_BUILD_OPTION. The current diff is a little too awkward (mainly in the .install files) but that's still worth keeping as a goal.

No warranty, created in the hope it's useful etc. etc., nothing to say it'll build on your system, no support for anything except the Debian armel port and only tested with the Emdebian toolchains from Lenny, not coming to a distribution near you anytime soon, don't blame me if it messes up your entire Qt-X11 installation on your desktop.... yada, yada.

It's just a bit of tinkering really, although it would be more fun if Qt didn't take quite so many hours to build each time I want to test a change....

6 March 2011

Neil Williams: Checking build-dependencies

I've been nagged for a while (by Wookey mainly) about the unhelpfulness of dpkg-checkbuilddeps when it outputs the versions and alternatives alongside the package names which are missing, making it harder to pass the list to the package manager. apt-get build-dep isn't particularly helpful either - it doesn't look at the modified package at all.

Of course, once the output is in a suitable format, a new script might as well make it possible to simply pass that output to said package manager. Once it can do that, it can then pass the output to the cross-dependency tools, like xapt.

So, I've refactored the embuilddeps script in the xapt package to do just this.

It's gained a few features in the process:

  1. Support for Build-Conflicts resolution

  2. Support for virtual packages, swapping the virtual for the first package to provide it (borrowed some code from Dpkg::Deps for that one).

  3. Support for Build-Depends alternatives (currently using the buildd default of "first alternative gets first chance")

  4. Reads data from debian/control, not the apt cache - to help with the package you're building instead of the one you've already uploaded.

  5. Handles cross dependencies (which are always assumed to not currently be installed) and native dependencies. This support is transitory until such time as enough packages are Multi-Arch compatible that Cross-Multi-Arch becomes trivial.

  6. Support for being used as a build-dependency resolver in pbuilder, including cross-architecture dependencies with pdebuild-cross.

  7. Can locate a debian/control file in a specified directory without needing to be called from that directory

  8. Checks your apt-cache policy to see if the required version of a package is available from your current apt sources. Fails completely if not. (The pdebuild-cross usage will need that to be extended a touch to look at the apt-cache policy from within the chroot.)

  9. Hector Oron has also been asking me to get embuilddeps working with sbuild, so I'm working on that feature too.

  10. Verbose and quiet support (so use -q inside other scripts)

  11. Most output is already translated - more translations are welcome, especially for the documentation, but hold on until this version has actually been uploaded.
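The virtual-package and alternatives handling (items 2 and 3 above) boils down to a simple first-match policy, which can be sketched roughly in Python - the provides map and package names below are invented for illustration, not read from real apt data, and the real implementation borrows from the Perl Dpkg::Deps module:

```python
# Hypothetical provides map: virtual package -> packages that Provide it.
PROVIDES = {"mail-transport-agent": ["exim4", "postfix"]}

def resolve(dep_groups, provides=PROVIDES):
    """For each group of alternatives, take the first one (the buildd
    default); if it is a virtual package, substitute the first real
    package that provides it."""
    chosen = []
    for group in dep_groups:
        pick = group[0]                       # first alternative gets first chance
        pick = provides.get(pick, [pick])[0]  # swap a virtual for its first provider
        chosen.append(pick)
    return chosen

deps = [["debhelper"], ["mail-transport-agent"], ["libfoo-dev", "libbar-dev"]]
print(resolve(deps))  # ['debhelper', 'exim4', 'libfoo-dev']
```

A recursive "try one, move on" model, as mentioned below, would wrap this in a retry loop over the remaining alternatives when the first choice turns out to be uninstallable.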



More testing is needed, particularly that the extensive refactoring hasn't broken the pbuilder resolution support and then looking at what still needs to be done for sbuild support. The new script is in Emdebian SVN.

The first-choice method for virtuals and alternatives may well bear extension to explicit management via command-line options. I'm unsure yet whether it needs to be a configuration file setting. It could simply be a recursive - try one, move on - model.

26 February 2011

Neil Williams: Why software patents differ from pharmaceutical patents

I was going to make this a comment against Ritesh Raj Sarraf's post about patents and pharmaceuticals but the comment got way too long.

The basic problem is that patents on pharmaceuticals cannot be directly compared to patents on software and the explanation gets a bit involved. It's worse than apples and oranges, it's more like comparing an algorithm to a piece of DNA.

Patented pharmaceuticals must declare the precise chemical formula of the molecule (and a whole lot else besides) before the molecule can be brought to market but the company holding the patent is legally prevented from protecting themselves against slightly different molecules which are blatantly based on the protected molecule. All modification results in a new patentable product, legally separated from the original.

Therefore, the first company to patent a molecule from a new class is instantly joined in the field by other companies who are (obviously) watching the research news channels very carefully. (In their shoes, you would do the same or you go bust.) These companies simply add a fluorine ion or change a hydrogen to a hydroxyl group or any other number of minor, apparently inconsequential, changes. Changes which, in software, would be deemed a derivative work and would be roughly equivalent to changing a single local variable from a 16-bit integer to a 32-bit integer in a single function in a single file in the entire codebase. i.e. a trivial bug fix or even a typo.

A molecule with a couple of hundred atoms can have a change affecting one atom and be a completely new patentable item without any risk of the other company being sued for patent infringement, plagiarism, copyright violation (there is no copyright) or any other barrier. It's common to find that molecule foo is joined on the market by a desoxyfoo and a fluorofoo etc. with a chlorofoo just around the corner, within months of the class-leader being launched. It can't be compared to software - effectively, every new release gets a new patent which covers the entirety of everything contained in that release, even if 99.999999999% of it is precisely the same as is covered under several other patents held by extremely litigious competitors.

Modification is simply not prohibited - copying without making any changes whatsoever to the active molecule is prohibited. Changes to the inactive ingredients will be prosecuted - changes to the active ingredient will not. Comparing this with software is likely to drive you insane.

For example, the mere process of packaging an upstream release for a distribution would be patent infringement in pharmaceutical terms - UNLESS you deliberately patch the original source to make an utterly trivial and apparently pointless change. At that point, the upstream completely disown your entire release and you would have to fix every bug yourself - without the original upstream even telling you whether they fixed the same bug themselves, because they aren't ABLE to modify the upstream code without starting a whole new project and they won't tell you why or how they changed things.

There is no such thing as an alpha-release of a molecule. Every release is final and no subsequent modification is possible - unless it only affects the inactive ingredients, e.g. by adding a slow release coating. These changes can affect the way that the molecule acts on the test subject / patient but must not change the active ingredient itself in any identifiable way. So the upstream can make a new wrapper (new packaging) but nobody else is allowed to do so until the patent expires.

Once the patent expires, everyone and their dog is allowed to make identical copies of the active ingredient in their own inactive packaging and these are called generics. Generics still need testing to ensure that the active ingredient truly is the same and that the generic behaves in the same way as the original but this process is cheaper than making a change to the active molecule.

A changed active molecule is an entirely new substance, it gets a new patent and it needs to go through largely the same amount of testing as the patented original but the new company still gets a head start on what the molecule is likely to do and what side-effects (by now noticeable with the class-leader product which is in large scale trials or even on the market) would be harmful to the future usefulness of their new, derivative, molecule. The new company simply jumps onto the bandwagon shouting "I want a piece of that action!" and there is absolutely nothing the original company can do about it, except try to show that their original molecule is better than the new competitors and to try and build a substantial market share before everyone switches to the new derivative. (Prescribers are human and tend therefore to be lazy, so unless there's a reason to change, the class leader will still have substantial share even after several derivatives exist.)

It's an arms race - each company tries to create as many of these potential "me-too" variants as possible in-house, put as many as can be (financially) supported through trials, keep pursuing those which don't have prohibitive side-effects / problems in tests and hope that, at the end, one of their versions will actually turn out to have better effectiveness or fewer side-effects. If not, they just have to market one as "as-effective-and-no-more-harmful-but-CHEAPER-than" the class leader. The majority of "me-too" molecules come under this category - even if the company concerned would try to point to infinitesimal differences in trial data to pretend otherwise. All this with the knowledge that the very hour that the patent expires, a dozen generic manufacturers will flood the market with copies which do not need to recover the massive costs of the original research and are therefore massively cheaper.

How much cheaper? Patented molecules may cost £30 for 28 doses on Monday and one generic may cost 3p for 500, another may cost 40p for 10,000 on Tuesday, the day after the patent expiry. Generic houses only need to cover the actual costs of manufacture, not the costs of the original patent etc. and by the time the patent expires, the efforts of the patent holder to generate market share mean that economies of scale become evident. The absolute cheapest isn't necessarily the one who gets the contract either, these things all have expiry dates and all 10,000 have to be sold / supplied from a single location within that date to actually deliver the full savings. So then you get satellite generics who buy in quantities of a few million and pack down into 56 or fewer so that the total purchased quantity can sit on many shelves instead of just one. It's the same tablet in the end, with the same markings too. All that differs is the marking on the foil and the colour of the cardboard. (Oh and the same foil and colour cardboard can contain tablets with different markings from different manufacturers from one batch to the next, all within the branding of the same repackaging company.)

The original patent-holding company also has an interest in generating their own "me-too" derivatives, sometimes marketed under the name of a company which turns out to be solely owned by the parent of the original company. It can get horribly recursive.
Sometimes, the derivative from the sibling company hits the market before the "true" competitor from a different company - simply because all companies try to have many molecules in development at any one time (because most changed molecules fail in testing) and sometimes they get lucky and come up with two v.similar molecules and have to do something to justify the cost of developing both. That can turn out to be a poisoned chalice rather than two golden geese.

Sometimes, the company fails to get enough variants through testing and the company then gets bought out by a rival - who then gains their research and continues experimenting with other derivative molecules based on the ones already tested and dismissed. The limiting factor is the money.

At this point, I must now declare my cards. I used to be a pharmacist and although I am still entitled to call myself the holder of a Bachelor of Science degree in Pharmaceutical Sciences I am most definitely no longer a pharmacist. I therefore defer to my still-registered colleagues in case I've messed up any of the detail here. The above cannot be declared as the opinion of a pharmacist. I am an ex-pharmacist who voluntarily chose to resign his registration to take up full time work as a software developer rather than pay the exorbitant costs of maintaining such registration and all the "proof-of-competency" requirements which such registration requires.

Finally, before you ask, I am very happy to be an ex-pharmacist - my erstwhile colleagues have my ongoing sympathy but no matter how much some of them may beg, they all know that I will not be rejoining them on that side of the dispensary bench if I've got any choice in the matter.

Software is just better. It's all just recycled electrons.
